Video Search and High-Level Feature Extraction
Authors
Abstract
A_CL1_1: the best-performing classifier for each concept, chosen from all of the following runs, combined with an event detection method.
A_CL2_2 (visual-based): the best-performing visual-based classifier for each concept, chosen from runs A_CL4_4, A_CL5_5, and A_CL6_6.
A_CL3_3 (visual-text): weighted average fusion of the visual-based classifier A_CL4_4 with a text classifier.
A_CL4_4 (visual-based): average fusion of A_CL5_5 with a lexicon-spatial pyramid matching approach that incorporates local features and pyramid matching.
A_CL5_5 (visual-based): average fusion of A_CL6_6 with a context-based concept fusion approach that exploits inter-concept relationships to help detect individual concepts.
A_CL6_6 (visual-based): average fusion of two SVM baseline classification results for each concept: (1) a fused result obtained by averaging three single-SVM classification results, where each SVM uses one type of feature (color, texture, or edge) and is trained on the whole training set with 40% of the negative samples; (2) a single SVM classifier using color and texture features, trained on the whole training set (90 videos) with 20% of the negative samples.
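The fusion steps above are all score-level combinations of per-concept classifier outputs. A minimal sketch of the two operations used repeatedly in these runs, average fusion of several single-feature SVM scores and weighted average fusion of a visual score with a text score; the score arrays and the weight `w` are hypothetical illustrations, not values from the paper:

```python
import numpy as np

# Hypothetical per-shot confidence scores for one concept from three
# single-feature SVM classifiers (color, texture, edge), one score per shot.
color_scores = np.array([0.8, 0.1, 0.6])
texture_scores = np.array([0.7, 0.3, 0.5])
edge_scores = np.array([0.9, 0.2, 0.4])

# Average fusion of the three single-SVM results (as in baseline (1) of A_CL6_6).
fused_visual = (color_scores + texture_scores + edge_scores) / 3.0

# Weighted average fusion of a visual-based classifier with a text classifier
# (the A_CL3_3-style combination); w is an assumed tuning parameter.
text_scores = np.array([0.2, 0.6, 0.3])
w = 0.7  # assumed weight on the visual modality
fused_visual_text = w * fused_visual + (1.0 - w) * text_scores
```

In practice the weight would be tuned per concept on a validation set, since text evidence helps some concepts far more than others.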
Similar resources
Zhejiang University at TRECVID 2006
We participated in the high-level feature extraction and interactive-search tasks for TRECVID 2006. The interaction and integration of multi-modality media types such as visual, audio, and textual data in video are the essence of video content analysis. Although any single modality expresses limited semantics to a greater or lesser extent, video semantics are fully manifested only by interaction and integ...
MSRA-USTC-SJTU at TRECVID 2007: High-Level Feature Extraction and Search
This paper describes the MSRA-USTC-SJTU experiments for TRECVID 2007. We performed the experiments in high-level feature extraction and automatic search tasks. For high-level feature extraction, we investigated the benefit of unlabeled data by semi-supervised learning, and the multi-layer (ML) multi-instance (MI) relation embedded in video by MLMI kernel, as well as the correlations between con...
National Institute of Informatics, Japan at TRECVID 2008
This paper reports our experiments for TRECVID 2009 tasks: high-level feature extraction, search, and content-based copy detection. For the high-level feature extraction task, we used baseline features such as color moments, edge orientation histograms, local binary patterns, and local features trained with SVM classifiers and nearest neighbor classifiers. For the search task, we used . Concern...
Video Understanding and Content-Based Retrieval
This year, the joint team of UCF and the University of Modena has participated in the following tasks: (1) shot boundary detection, (2) low-level feature extraction, (3) high-level feature extraction, (4) topic search and (5) BBC rushes management. The shot boundary detection was contributed by the Image Lab at the University of Modena. The other tasks were performed by the Computer Vision Team...
Beyond Semantic Search: What You Observe May Not Be What You Think
This paper presents our approaches and results of the four TRECVID 2008 tasks we participated in: high-level feature extraction, automatic video search, video copy detection, and rushes summarization. In high-level feature extraction, we jointly submitted our results with Columbia University. The four runs submitted through CityU aim to explore context-based concept fusion by modeling inter-con...
Accelerating Video Feature Extractions in CBVIR on Multi-core Systems
With the explosive increase in video data, automatic video management (search/retrieval) is becoming a mass market application, and Content-Based Video Information Retrieval (CBVIR) is one of the best solutions. Most CBVIR systems are based on low-level feature extractions guided by the MPEG-7 standard for high-level semantic concept indexing. It is well known that CBVIR is a very compute-inten...
Publication date: 2005